    The search for the compactified Kerr solution

    Due to the complexity of the Einstein equations, their general solution remains unknown. There exist quite a few special solutions, obtained by assuming certain symmetries that reduce the complexity of the equations. That is one reason why any exact solution is important: it may shed some light on the general problem. There is also demand from string theories for a special type of solution, namely compactified solutions. String theories use more than four dimensions, and for these theories to make physical sense the extra dimensions must be compactified. The search for compactified analogs of the known solutions has therefore become an important task. The best-known non-compactified solutions, widely used in physics, are the Schwarzschild [15] and Kerr [11] solutions, which are discussed in detail in Chapter 2 of this thesis. Chapter 2 also describes the compactified analog of the Schwarzschild solution, obtained independently by Korotkin and Nicolai [12] and by Myers [14]. However, the compactified analog of the Kerr solution remains unknown. In Chapter 3 the asymptotic behaviour of the compactified analog of the Kerr solution is investigated, and two possible ways of solving this problem are discussed.

    Self-Supervised Contrastive BERT Fine-tuning for Fusion-based Reviewed-Item Retrieval

    As natural language interfaces enable users to express increasingly complex natural language queries, there is a parallel explosion of user review content that can allow users to better find items such as restaurants, books, or movies that match these expressive queries. While Neural Information Retrieval (IR) methods have provided state-of-the-art results for matching queries to documents, they have not been extended to the task of Reviewed-Item Retrieval (RIR), where query-review scores must be aggregated (or fused) into item-level scores for ranking. In the absence of labeled RIR datasets, we extend Neural IR methodology to RIR by leveraging self-supervised methods for contrastive learning of BERT embeddings for both queries and reviews. Specifically, contrastive learning requires a choice of positive and negative samples, where the unique two-level structure of our item-review data combined with meta-data affords us a rich structure for the selection of these samples. For contrastive learning in a Late Fusion scenario, we investigate the use of positive review samples from the same item and/or with the same rating, selection of hard positive samples by choosing the least similar reviews from the same anchor item, and selection of hard negative samples by choosing the most similar reviews from different items. We also explore anchor sub-sampling and augmenting with meta-data. For a more end-to-end Early Fusion approach, we introduce contrastive item embedding learning to fuse reviews into single item embeddings. Experimental results show that Late Fusion contrastive learning for Neural RIR outperforms all other contrastive IR configurations, Neural IR, and sparse retrieval baselines, thus demonstrating the power of exploiting the two-level structure in Neural RIR approaches as well as the importance of preserving the nuance of individual review content via Late Fusion methods.
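    The hard-sample selection described above (least similar review from the same item as a hard positive, most similar review from a different item as a hard negative) can be sketched as follows. This is an illustrative reconstruction from the abstract, not the paper's actual code; the function name and cosine-similarity choice are assumptions.

```python
import numpy as np

def select_contrastive_pairs(review_embs, item_ids, anchor_idx):
    """For an anchor review, pick a hard positive (the least similar
    review from the same item) and a hard negative (the most similar
    review from a different item), as in the Late Fusion setup.
    Illustrative sketch; names and similarity measure are assumptions."""
    # Cosine similarity of the anchor to every review.
    embs = review_embs / np.linalg.norm(review_embs, axis=1, keepdims=True)
    sims = embs @ embs[anchor_idx]
    same_item = item_ids == item_ids[anchor_idx]
    same_item[anchor_idx] = False                 # exclude the anchor itself
    # Hard positive: least similar review of the same item.
    pos_candidates = np.where(same_item)[0]
    hard_pos = pos_candidates[np.argmin(sims[pos_candidates])]
    # Hard negative: most similar review of a different item.
    neg_candidates = np.where(item_ids != item_ids[anchor_idx])[0]
    hard_neg = neg_candidates[np.argmax(sims[neg_candidates])]
    return hard_pos, hard_neg
```

    The returned indices would feed a standard contrastive loss (e.g. a triplet or InfoNCE objective) when fine-tuning the BERT encoder.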

    Spatial Data Reconstruction via ADMM and Spatial Spline Regression

    Reconstructing fine-grained spatial densities from coarse-grained measurements, namely the aggregate observations recorded for each subregion in the spatial field of interest, is a critical problem in many real-world applications. In this paper, we propose a novel Constrained Spatial Smoothing (CSS) approach for the problem of spatial data reconstruction. We observe that local continuity exists in many types of spatial data. Based on this observation, our approach performs sparse recovery via a finite element method, while at the same time enforcing the aggregated observation constraints through an innovative use of the Alternating Direction Method of Multipliers (ADMM) algorithm framework. Furthermore, our approach is able to incorporate external information as a regression add-on to further enhance recovery performance. To evaluate our approach, we study the problem of reconstructing the spatial distribution of cellphone traffic volumes based on aggregate volumes recorded at sparsely scattered base stations. We perform extensive experiments based on a large dataset of Call Detail Records and a geographical and demographical attribute dataset from the city of Milan, and compare our approach with other methods such as Spatial Spline Regression. The evaluation results show that our approach significantly outperforms various baseline approaches. This demonstrates that jointly modeling the underlying spatial continuity and the local features that characterize the heterogeneity of different locations can help improve the performance of spatial recovery.
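    The core idea, smoothness regularization combined with hard aggregate constraints enforced via ADMM, can be sketched on a 1-D toy problem. This is a minimal illustration under stated assumptions: a simple first-difference penalty stands in for the paper's finite-element/spline smoothness term, and the function name is hypothetical.

```python
import numpy as np

def admm_spatial_recovery(A, b, n, rho=1.0, n_iter=1000):
    """Recover a fine-grained field x from aggregate observations A @ x = b
    while encouraging local continuity (small first differences).
    ADMM splitting:  min 0.5*||D x||^2 + I{A z = b}(z)  s.t.  x = z.
    Sketch only; the paper's actual smoothness term is spline-based."""
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AAT_inv = np.linalg.inv(A @ A.T)
    H = np.linalg.inv(D.T @ D + rho * np.eye(n))   # cached x-update solve
    for _ in range(n_iter):
        x = H @ (rho * (z - u))                    # smoothness step
        v = x + u
        z = v - A.T @ (AAT_inv @ (A @ v - b))      # project onto {A z = b}
        u = u + x - z                              # dual update
    return z
```

    Because the z-update is an exact projection, the returned field satisfies the aggregate constraints exactly at every iteration; the iterations trade off smoothness against the dual variable until the two blocks agree.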

    Optimal Search with Neural Networks: Challenges and Approaches

    Work in machine learning has grown tremendously in recent years, but has had little to no impact on optimal search approaches. This paper looks at challenges in using deep learning as a part of optimal search, including what is feasible using current public frameworks, and what barriers exist for further adoption. The primary contribution of the paper is to show how to learn admissible heuristics through supervised learning from an existing heuristic. Several approaches are described, with the most successful approach being based on learning a heuristic as a classifier and then adjusting the quantile used with the classifier to ensure heuristic admissibility, which is required for optimal solutions. A secondary contribution is a description of the Batch A* algorithm, which can batch evaluations for more efficient use of the GPU. While ANNs can effectively learn heuristics that produce smaller search trees than alternate compression approaches, there is still a time overhead when compared to efficient C++ implementations. This evaluation highlights a challenge for future work.
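    The quantile-adjustment idea can be sketched as follows: if the classifier outputs a probability distribution over discrete heuristic-value buckets, reading off a low quantile of that distribution makes overestimation (and hence inadmissibility) unlikely. This is an illustrative reconstruction from the abstract, not the paper's exact procedure; the function name and bucket scheme are assumptions.

```python
import numpy as np

def quantile_heuristic(class_probs, values, quantile=0.05):
    """Collapse a classifier's distribution over heuristic-value buckets
    into a single conservative estimate by taking a low quantile: the
    smaller the quantile, the less likely the estimate exceeds the true
    cost-to-go. Sketch only; bucket values and quantile are assumptions."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)                  # buckets sorted by value
    cdf = np.cumsum(class_probs[order])         # cumulative probability mass
    # Smallest bucket whose cumulative mass reaches the target quantile.
    idx = np.searchsorted(cdf, quantile)
    return values[order[idx]]
```

    In an A*-style search, the quantile would be tuned downward until the learned heuristic never overestimates on a validation set, trading heuristic strength for admissibility.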